QEBVerif: Quantization Error Bound Verification of Neural Networks

Authors

Abstract

To alleviate the practical constraints for deploying deep neural networks (DNNs) on edge devices, quantization is widely regarded as one promising technique. It reduces resource requirements such as computational power and storage space by quantizing the weights and/or activation tensors of a DNN into lower bit-width fixed-point numbers, resulting in quantized neural networks (QNNs). While quantization has been empirically shown to introduce only minor accuracy loss, critical verified properties of a DNN might become invalid once it is quantized. Existing verification methods focus either on individual networks (DNNs or QNNs) or on the quantization error bound under partial quantization. In this work, we propose a quantization error bound verification method, named QEBVerif, in which both weights and activation tensors are quantized. QEBVerif consists of two parts, i.e., a differential reachability analysis (DRA) and a mixed-integer linear programming (MILP) based verification method. DRA performs a difference analysis between the DNN and its quantized counterpart layer by layer to compute a tight quantization error interval efficiently. If DRA fails to prove the error bound, we encode the verification problem into an equivalent MILP problem, which can be solved by off-the-shelf solvers. Thus, QEBVerif is sound, complete, and reasonably efficient. We implement QEBVerif and conduct extensive experiments, showing its effectiveness and efficiency.
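To make the two-stage pipeline in the abstract concrete, below is a minimal, hypothetical Python/NumPy sketch of an interval-based difference analysis. It is not the paper's DRA: it propagates value intervals through the floating-point network and through a weight-only quantized copy and then subtracts them, which is sound but much looser than QEBVerif's layer-by-layer difference propagation, and it ignores activation quantization. All function names here are invented for illustration; when the interval check is inconclusive, the abstract's MILP encoding would take over.

# Illustrative sketch only: naive interval analysis of the quantization error
# between a ReLU DNN and a weight-quantized copy. Not the paper's algorithm.
import numpy as np

def interval_affine(W, b, lo, hi):
    # Sound bounds for W @ x + b when lo <= x <= hi (elementwise).
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def interval_forward(weights, biases, lo, hi):
    # Propagate an input box through a fully connected ReLU network.
    for i, (W, b) in enumerate(zip(weights, biases)):
        lo, hi = interval_affine(W, b, lo, hi)
        if i < len(weights) - 1:          # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return lo, hi

def quantize_weights(W, bits=8):
    # Uniform symmetric fixed-point quantization, then dequantization.
    scale = np.max(np.abs(W)) / (2 ** (bits - 1) - 1)
    return np.round(W / scale) * scale

def naive_error_interval(weights, biases, in_lo, in_hi, bits=8):
    q_weights = [quantize_weights(W, bits) for W in weights]
    f_lo, f_hi = interval_forward(weights, biases, in_lo, in_hi)
    q_lo, q_hi = interval_forward(q_weights, biases, in_lo, in_hi)
    # Difference interval for (DNN output - QNN output).
    return f_lo - q_hi, f_hi - q_lo

# Example: does the quantization error stay within +/- 0.5 on a small input box?
rng = np.random.default_rng(0)
weights = [rng.normal(size=(4, 3)), rng.normal(size=(2, 4))]
biases = [rng.normal(size=4), rng.normal(size=2)]
d_lo, d_hi = naive_error_interval(weights, biases,
                                  in_lo=np.zeros(3), in_hi=np.ones(3) * 0.1)
print("proved" if (d_lo >= -0.5).all() and (d_hi <= 0.5).all()
      else "inconclusive: fall back to MILP")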


Related articles

Prediction of Red Mud Bound-Soda Losses in Bayer Process Using Neural Networks

In the Bayer process, the reaction of silica in bauxite with caustic soda causes the loss of a great amount of NaOH. In this research, the bound-soda losses in the Bayer process solid residue (red mud) are predicted using intelligent techniques. This method, based on the application of regression and artificial neural networks (ANN), has been used to predict red mud bound-soda losses in Iran Alumina C...


Verification of Binarized Neural Networks

We study the problem of formal verification of Binarized Neural Networks (BNN), which have recently been proposed as an energy-efficient alternative to traditional learning networks. The verification of BNNs, using the reduction to hardware verification, can be made even more scalable by factoring computations among neurons within the same layer. By proving the NP-hardness of finding optimal factoring...


Resiliency of Deep Neural Networks under Quantization

The complexity of deep neural network algorithms for hardware implementation can be much lowered by optimizing the word-length of weights and signals. Direct quantization of floating-point weights, however, does not show good performance when the number of bits assigned is small. Retraining of quantized networks has been developed to relieve this problem. In this work, the effects of quantizati...


Lyapunov Stability Analysis of the Quantization Error for DCS Neural Networks

In this paper we show that the quantization error for Dynamic Cell Structures (DCS) Neural Networks (NN) as defined by Bruske and Sommer provides a measure of the Lyapunov stability of the weight centers of the neural net. We also show, however, that this error is insufficient in itself to verify that DCS neural networks provide stable topological representation of a given fixed input feature m...


Handwritten signature verification based on neural 'gas' based vector quantization

This paper proposes a vector quantization (VQ) technique to solve the problem of handwritten signature verification. A neural 'gas' model is trained to establish a reference set for each registered person with handwritten signature samples. Then a test sample is compared with all the prototypes in the reference set and the system outputs the label of the writer of the word. Several different fea...
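As a rough illustration of the matching step described in that summary (not the paper's actual system), the following hypothetical sketch assigns a test signature's feature vector the label of the writer whose prototype set, assumed to come from a trained neural-gas model, contains the nearest prototype; the Euclidean metric and all names are assumptions.

# Illustrative nearest-prototype matching over per-writer reference sets.
import numpy as np

def verify_writer(test_features, reference_sets):
    # reference_sets: dict mapping writer label -> array of prototype vectors.
    best_label, best_dist = None, np.inf
    for label, prototypes in reference_sets.items():
        dist = np.min(np.linalg.norm(prototypes - test_features, axis=1))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label, best_dist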



Journal

Journal title: Lecture Notes in Computer Science

Year: 2023

ISSN: 1611-3349, 0302-9743

DOI: https://doi.org/10.1007/978-3-031-37703-7_20